
    Social Learning Systems: The Design of Evolutionary, Highly Scalable, Socially Curated Knowledge Systems

    In recent times, great strides have been made in automated reasoning and knowledge management applications, along with their associated methodologies. The introduction of the World Wide Web piqued academicians' interest in harnessing the power of linked, online documents for developing machine learning corpora, providing dynamic knowledge bases for question answering systems, fueling automated entity extraction applications, and performing graph-analytic evaluations, such as uncovering the inherent structural semantics of linked pages. More recently, substantial attention in the wider computer science and information systems disciplines has focused on the evolving study of social computing phenomena, primarily those associated with the use, development, and analysis of online social networks (OSNs). This work followed an independent effort to develop an evolutionary knowledge management system, and outlines a model for integrating the wisdom of the crowd into the process of collecting, analyzing, and curating data for dynamic knowledge systems. Throughout, we examine how relational data modeling, automated reasoning, crowdsourcing, and social curation techniques have been exploited to extend the utility of web-based, transactional knowledge management systems, creating a new breed of knowledge-based system in the process: the Social Learning System (SLS). The key questions this work explores in elucidating the SLS model include 1) how Web and OSN mining techniques can be unified under a versatile, structured, and computationally efficient ontological framework, and 2) how large-scale knowledge projects may incorporate tiered collaborative editing systems to elicit knowledge contributions and curation activities from a diverse, participatory audience.
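    The tiered collaborative editing idea mentioned above can be sketched minimally: contributors at different trust tiers submit edits, with lower-tier edits held for curator approval and higher-tier edits published directly. The tier names, class names, and workflow below are illustrative assumptions, not the dissertation's actual design.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical trust tiers for a tiered collaborative editing system;
# the actual tier structure in the SLS model may differ.
class Tier(IntEnum):
    READER = 0       # may browse only
    CONTRIBUTOR = 1  # edits enter a moderation queue
    CURATOR = 2      # edits publish directly; may approve queued edits

@dataclass
class Edit:
    author: str
    tier: Tier
    content: str
    approved: bool = False

class KnowledgeEntry:
    """A single knowledge-base entry whose revisions are socially curated."""

    def __init__(self, content: str):
        self.content = content
        self.pending: list[Edit] = []

    def submit(self, edit: Edit) -> str:
        """Route an incoming edit according to the author's tier."""
        if edit.tier >= Tier.CURATOR:
            self.content = edit.content   # trusted: publish immediately
            edit.approved = True
            return "published"
        if edit.tier >= Tier.CONTRIBUTOR:
            self.pending.append(edit)     # queue for curator review
            return "queued"
        return "rejected"                 # readers cannot edit

    def approve(self, curator_tier: Tier, edit: Edit) -> bool:
        """A curator promotes a queued edit to the live entry."""
        if curator_tier >= Tier.CURATOR and edit in self.pending:
            self.pending.remove(edit)
            self.content = edit.content
            edit.approved = True
            return True
        return False
```

    In use, a contributor's edit stays invisible until a curator approves it, which is one way a "diverse, participatory audience" can contribute without degrading the curated corpus.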

    The Intelligent Data Brokerage: A Utility-Enhancing Architecture for Algorithmic Anonymity Measures

    The anonymization of widely distributed or open data has been a topic of great interest to privacy advocates in recent years. The goal of anonymization in these cases is to make data available to a larger audience, extending the utility of the data to new environments and evolving use cases without compromising the personal information of the individuals whose data are being distributed. The central issue with such practices is that any anonymity measure entails a trade-off between privacy and utility, where maximizing one carries a cost to the other. In this paper, the authors propose a framework for the utility-preserving release of anonymized data, based on the idea of intelligent data brokerages. These brokerages act as intermediaries between users requesting access to information resources and an existing database management system (DBMS). Through the use of a formal language for interpreting user information requests, customizable anonymization policies, and optional natural language processing (NLP) capabilities, data brokerages can maximize the utility of data in-context when responding to user inquiries.
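    The brokerage architecture described above can be illustrated with a minimal sketch: the broker sits between the requester and the raw rows, returning only requested fields after applying a per-field anonymization policy. The policy format, field names, and generalization rules here are assumptions for illustration; the paper's formal request language is not reproduced.

```python
# Hypothetical per-field anonymization policy a broker might enforce;
# suppression and generalization are standard anonymity transformations.
POLICY = {
    "name":     lambda v: "*",                                        # suppress
    "zip_code": lambda v: v[:3] + "**",                               # generalize to prefix
    "age":      lambda v: f"{(v // 10) * 10}-{(v // 10) * 10 + 9}",   # 10-year bins
}

def broker_respond(rows, requested_fields, policy=POLICY):
    """Return only the requested fields, anonymized per policy.

    Acts as the intermediary: the requester never sees raw DBMS rows,
    and fields with no policy entry pass through unchanged.
    """
    out = []
    for row in rows:
        masked = {}
        for field in requested_fields:
            if field not in row:
                continue
            transform = policy.get(field)
            masked[field] = transform(row[field]) if transform else row[field]
        out.append(masked)
    return out

# Illustrative rows standing in for a DBMS result set.
db_rows = [
    {"name": "Ada", "zip_code": "40202", "age": 36, "diagnosis": "flu"},
    {"name": "Bob", "zip_code": "40218", "age": 41, "diagnosis": "cold"},
]
released = broker_respond(db_rows, ["zip_code", "age", "diagnosis"])
# zip codes generalized to "402**"; ages binned to "30-39" / "40-49"
```

    The privacy/utility trade-off the abstract describes is visible even in this toy: a coarser zip-code prefix strengthens anonymity but reduces the analytic value of the released rows.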